
    HoloBeam: Paper-Thin Near-Eye Displays

    Beaming displays, an emerging alternative to conventional Augmented Reality (AR) glasses designs, promise slim AR glasses free from challenging design trade-offs, including battery-related limits and computational budget constraints. These displays remove active components such as batteries and electronics from the glasses and move them to a projector that beams images to the user from a distance (1-2 meters); the user wears only a passive optical eyepiece. However, earlier implementations of these displays delivered poor resolution (7 cycles per degree) without any optical focus cues and relied on a bulky eyepiece (50 mm thick). This paper introduces a new milestone for beaming displays, which we call HoloBeam. In this new design, a custom holographic projector populates a micro-volume located at some distance (1-2 meters) with multiple planes of images. Users view magnified copies of these images from this small volume with the help of an eyepiece that is either a Holographic Optical Element (HOE) or a set of lenses. Our HoloBeam prototypes demonstrate the thinnest AR glasses to date, with submillimeter thickness (e.g., the HOE film is only 120 µm thick). In addition, HoloBeam prototypes achieve near-retinal resolution (24 cycles per degree) with a 70-degree-wide field of view. Comment: 15 pages, 18 Figures, 1 Table, 1 Listing
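
    The key computational step in HoloBeam is filling a small volume with several image planes from a single hologram. As a rough illustration of how such multiplane holograms can be computed, the sketch below optimizes a phase-only hologram with gradient descent so that its angular-spectrum reconstructions at a few depths match placeholder targets. This is not the paper's implementation; the wavelength, pixel pitch, resolution, depths, and targets are all illustrative assumptions.

        # Minimal multiplane phase-only hologram optimization (illustrative
        # sketch, not the HoloBeam implementation).
        import torch

        wavelength = 515e-9          # green source, meters (assumed)
        pitch = 8e-6                 # hologram pixel pitch, meters (assumed)
        n = 256                      # hologram resolution (assumed)
        depths = [0.10, 0.15, 0.20]  # distances to image planes, meters (assumed)
        targets = [torch.rand(n, n) for _ in depths]  # placeholder target images

        fx = torch.fft.fftfreq(n, d=pitch)
        FX, FY = torch.meshgrid(fx, fx, indexing="ij")
        k2 = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2  # squared axial frequency

        def propagate(field, z):
            # Angular spectrum propagation of a complex field by distance z.
            H = torch.exp(2j * torch.pi * z * torch.sqrt(k2.clamp(min=0.0)))
            return torch.fft.ifft2(torch.fft.fft2(field) * H)

        phase = torch.zeros(n, n, requires_grad=True)
        opt = torch.optim.Adam([phase], lr=0.1)
        for _ in range(200):
            opt.zero_grad()
            field = torch.exp(1j * phase)  # phase-only hologram
            loss = sum(torch.mean((propagate(field, z).abs() ** 2 - t) ** 2)
                       for z, t in zip(depths, targets))
            loss.backward()
            opt.step()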

    Flexible modeling of next-generation displays using a differentiable toolkit

    We introduce an open-source toolkit for simulating optics and visual perception. The toolkit offers differentiable functions that ease optimization in the design process. It also supports applications ranging from calculating holograms for holographic displays to foveation in computer graphics. We believe this toolkit offers a gateway to removing overheads in scientific research on next-generation displays.
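
    The abstract does not spell out an API, but the value of differentiable building blocks is easy to demonstrate: when the image-formation model is differentiable, a design parameter can be recovered by gradient descent rather than exhaustive search. The sketch below, a plain PyTorch illustration rather than the toolkit's actual interface, fits a hypothetical defocus parameter of a Gaussian point-spread-function renderer to a reference image.

        # Illustrative sketch of differentiable design optimization; the
        # Gaussian-blur renderer and all values are assumptions.
        import torch

        def render(scene, sigma):
            # Differentiable image formation: blur with a Gaussian PSF whose
            # width is the design parameter.
            x = torch.arange(-4, 5, dtype=torch.float32)
            g = torch.exp(-x ** 2 / (2 * sigma ** 2))
            g = g / g.sum()
            kernel = g[None, None, :, None] * g[None, None, None, :]
            return torch.nn.functional.conv2d(scene, kernel, padding=4)

        scene = torch.rand(1, 1, 64, 64)
        target = render(scene, torch.tensor(1.5))      # "measured" reference

        sigma = torch.tensor(3.0, requires_grad=True)  # initial design guess
        opt = torch.optim.Adam([sigma], lr=0.05)
        for _ in range(300):
            opt.zero_grad()
            loss = torch.mean((render(scene, sigma) - target) ** 2)
            loss.backward()
            opt.step()
        # sigma drifts toward 1.5 because gradients flow through the renderer.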

    HoloHDR: Multi-color Holograms Improve Dynamic Range

    Holographic displays generate Three-Dimensional (3D) images by displaying single-color holograms time-sequentially, each lit by a single-color light source. However, representing colors one at a time limits peak brightness and dynamic range in holographic displays. This paper introduces a new driving scheme, HoloHDR, for realizing higher-dynamic-range images in holographic displays. Unlike the conventional driving scheme, in HoloHDR, three light sources illuminate each displayed hologram simultaneously at various brightness levels. In this way, HoloHDR reconstructs a multiplanar 3D target scene using consecutive multi-color holograms and persistence of vision. We co-optimize multi-color holograms and the required brightness levels of each light source using a gradient descent-based optimizer with a combination of application-specific loss terms. We experimentally demonstrate that HoloHDR can increase brightness levels in holographic displays by up to three times with support for a broader dynamic range, unlocking new potential for perceptual realism in holographic displays. Comment: 10 pages, 11 figures
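
    The co-optimization at the heart of HoloHDR can be sketched as a joint gradient descent over three phase holograms and a 3x3 matrix of brightness levels (one level per light source per displayed hologram). The sketch below is not the paper's code: a simple Fourier-hologram model stands in for the paper's propagation model, and the resolution, learning rate, and target are assumptions.

        # Illustrative joint optimization of multi-color holograms and
        # light-source brightness levels (not the HoloHDR implementation).
        import torch

        n = 128
        target = torch.rand(3, n, n)  # placeholder RGB target scene

        phases = torch.zeros(3, n, n, requires_grad=True)     # one hologram per frame
        levels = torch.full((3, 3), 0.5, requires_grad=True)  # frame-by-color brightness

        opt = torch.optim.Adam([phases, levels], lr=0.1)
        for _ in range(300):
            opt.zero_grad()
            # Fourier-hologram reconstruction of each time-sequential frame.
            recon = torch.fft.fft2(torch.exp(1j * phases)).abs() ** 2
            recon = recon / recon.amax(dim=(-2, -1), keepdim=True)
            # Persistence of vision: each color channel accumulates over the
            # three frames, weighted by that source's brightness level.
            image = torch.einsum("fc,fhw->chw", levels.clamp(0.0, 1.0), recon)
            loss = torch.mean((image - target) ** 2)
            loss.backward()
            opt.step()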

    Dynamic model and control of a new quadrotor unmanned aerial vehicle with tilt-wing mechanism

    In this work, a dynamic model of a new quadrotor aerial vehicle equipped with a tilt-wing mechanism is presented. The vehicle is capable of vertical take-off and landing (VTOL) like a helicopter and of flying horizontally like an airplane. The dynamic model of the vehicle is derived for both vertical and horizontal flight modes using the Newton-Euler formulation. An LQR controller for the vertical flight mode has also been developed, and its performance has been tested in several simulations.
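
    As a rough illustration of the LQR design step, the sketch below computes an optimal state-feedback gain for a double-integrator altitude model near hover. The model and weights are illustrative assumptions, not the paper's vehicle dynamics.

        # Illustrative LQR design for hover control (assumed model and weights).
        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Linearized altitude dynamics near hover: state x = [z, z_dot],
        # input u = vertical thrust deviation from gravity compensation.
        A = np.array([[0.0, 1.0],
                      [0.0, 0.0]])
        B = np.array([[0.0],
                      [1.0]])
        Q = np.diag([10.0, 1.0])  # penalize altitude error more than velocity
        R = np.array([[0.1]])     # control effort weight

        # Solve the continuous-time algebraic Riccati equation, then form
        # the optimal gain K so that u = -K x.
        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.inv(R) @ B.T @ P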

    Modeling and position control of a new quad-rotor unmanned aerial vehicle with tilt-wing mechanism

    In this work, a dynamic model of a new quadrotor aerial vehicle equipped with a tilt-wing mechanism is presented. The vehicle is capable of vertical take-off and landing (VTOL) like a helicopter and of flying horizontally like an airplane. The dynamic model of the vehicle is derived for both vertical and horizontal flight modes using the Newton-Euler formulation. An LQR controller for the vertical flight mode has also been developed, and its performance has been tested in several simulations.

    Mathematical modeling and vertical flight control of a tilt-wing UAV

    This paper presents a mathematical model and vertical flight control algorithms for a new tilt-wing unmanned aerial vehicle (UAV). The vehicle is capable of vertical take-off and landing (VTOL). Due to its tilt-wing structure, it can also fly horizontally. The mathematical model of the vehicle is obtained using the Newton-Euler formulation. A gravity-compensated PID controller is designed for altitude control, and three PID controllers are designed for attitude stabilization of the vehicle. The performance of these controllers is quite satisfactory, as demonstrated by indoor and outdoor flight experiments.
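
    A gravity-compensated PID controller of the kind described above adds a feedforward term m*g to a standard PID on altitude error, so the feedback terms only handle deviations from hover. The sketch below illustrates the idea; the gains, mass, and setpoint are assumptions, not values from the paper.

        # Illustrative gravity-compensated PID altitude controller.
        class GravityCompPID:
            def __init__(self, kp, ki, kd, mass, g=9.81):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.mass, self.g = mass, g
                self.integral = 0.0
                self.prev_error = None

            def thrust(self, z_ref, z, dt):
                # PID on altitude error plus m*g feedforward to cancel gravity.
                error = z_ref - z
                self.integral += error * dt
                derivative = (0.0 if self.prev_error is None
                              else (error - self.prev_error) / dt)
                self.prev_error = error
                return (self.mass * self.g + self.kp * error
                        + self.ki * self.integral + self.kd * derivative)

        # Example: 4.5 kg vehicle (assumed), commanding a climb to 2 m.
        pid = GravityCompPID(kp=8.0, ki=1.0, kd=4.0, mass=4.5)
        u = pid.thrust(z_ref=2.0, z=0.0, dt=0.01)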

    Optical Gaze Tracking with Spatially-Sparse Single-Pixel Detectors

    Gaze tracking is an essential component of next-generation displays for virtual reality and augmented reality applications. Traditional camera-based gaze trackers used in next-generation displays are known to be lacking in one or more of the following metrics: power consumption, cost, computational complexity, estimation accuracy, latency, and form factor. We propose the use of discrete photodiodes and light-emitting diodes (LEDs) as an alternative to traditional camera-based gaze tracking approaches while taking all of these metrics into consideration. We begin by developing a rendering-based simulation framework for understanding the relationship between light sources and a virtual model eyeball. Findings from this framework guide the placement of LEDs and photodiodes. Our first prototype uses a neural network to obtain an average error rate of 2.67° at 400 Hz while demanding only 16 mW. By simplifying the implementation to use only LEDs, duplexed as light transceivers, and a more minimal machine learning model, namely a lightweight supervised Gaussian process regression algorithm, we show that our second prototype is capable of an average error rate of 1.57° at 250 Hz using 800 mW. Comment: 10 pages, 8 figures, published in IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2020
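
    The regression stage of the second prototype can be sketched with an off-the-shelf Gaussian process regressor mapping photosensor readings to gaze angles. scikit-learn stands in for the paper's lightweight implementation here, and the sensor count and synthetic training data are assumptions.

        # Illustrative photosensor-to-gaze regression with a Gaussian process.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)
        X_train = rng.uniform(0.0, 1.0, size=(500, 8))     # 8 sensor readings (assumed)
        y_train = rng.uniform(-20.0, 20.0, size=(500, 2))  # gaze yaw/pitch, degrees

        # RBF kernel plus a noise term; hyperparameters are fit by maximizing
        # the marginal likelihood inside .fit().
        gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                                       normalize_y=True)
        gpr.fit(X_train, y_train)

        x_new = rng.uniform(0.0, 1.0, size=(1, 8))
        gaze = gpr.predict(x_new)  # predicted [yaw, pitch] in degrees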